Vocal wow and tremor are slow modulations of the voice presumed to result from integration of auditory and somatosensory feedback, respectively. This distinction has important implications for the diagnosis and treatment of neurological disorders that may differentially impact these systems, but the underlying mechanisms remain poorly understood. An important contribution on this matter is the reflex resonance model [Titze et al. (2002). J. Acoust. Soc. Am. 111(5), 2272-2282], which demonstrates that a 4-7 Hz vibrato (or tremor) can indeed be elicited by adjusting feedback parameters in a simple model of laryngeal muscle activation, mediated by time-delayed somatosensory feedback. This paper expands on that model by incorporating an auditory feedback loop and shows that wow emerges as feedback parameters exceed critical values described by a Hopf bifurcation. The wow period increases with delay and is almost invariant with respect to gain for delays above 200 ms. Parametric formulas for recovering feedback parameters from the acoustic signal are presented. With both feedback loops in place, auditory and somatosensory parameters interact and alter vocal modulations. Model predictions are illustrated in two subjects, one with a diagnosis of multiple sclerosis and intermittent tremor. Findings suggest that phonatory instabilities provide considerable insight into normal and pathological changes to the sensorimotor control of voice.

It has been reported that audible sounds can be heard behind a parametric array loudspeaker in the free field, which cannot be predicted by existing models. A non-paraxial model is developed in this paper for a finite-size, disk-shaped parametric source, based on the quasilinear approximation and disk scattering theory.
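The delayed-feedback instability described in the wow/tremor abstract above can be illustrated with a minimal sketch: a first-order activation variable with saturating, time-delayed negative feedback. This is a generic delayed-feedback oscillator, not the reflex resonance model itself; the equation, gain, delay, and time-constant values are illustrative assumptions.

```python
import math

def simulate(gain, delay_s, dt=0.001, t_end=20.0, tau=0.05):
    """Integrate tau * dx/dt = -x(t) - gain * tanh(x(t - delay_s)) by forward
    Euler. A first-order activation lag with saturating, time-delayed
    negative feedback; all parameter values are illustrative, not fitted."""
    n_delay = int(round(delay_s / dt))
    x = [0.01] * (n_delay + 1)          # small perturbation as initial history
    for _ in range(int(t_end / dt)):
        x_now, x_del = x[-1], x[-1 - n_delay]
        x.append(x_now + dt * (-x_now - gain * math.tanh(x_del)) / tau)
    return x[n_delay + 1:]

def oscillation_period(trace, dt):
    """Estimate the period from upward zero crossings in the second half
    of the trace (the first half is discarded as transient)."""
    half = trace[len(trace) // 2:]
    ups = [i for i in range(1, len(half)) if half[i - 1] < 0.0 <= half[i]]
    if len(ups) < 2:
        return None                      # no sustained oscillation
    return dt * (ups[-1] - ups[0]) / (len(ups) - 1)
```

Below a critical gain the perturbation decays; above it a sustained oscillation emerges whose period grows with the delay, qualitatively matching the Hopf-bifurcation behavior the abstract describes.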
The sounds on the front and back sides are calculated numerically and compared with the existing non-paraxial model for a parametric source mounted in an infinitely large baffle. Both simulation and experimental results show that audible sound exists on the back side, and the mechanism of the phenomenon is explored.

The virtual head wave is produced through cross-correlation processing of signals containing the real, acoustic head wave. The virtual head wave has the same phase speed as the head wave, but its travel time is offset, hence the term virtual. Like the real head wave, it propagates in a direction corresponding to the seabed critical angle, and its travel time varies with array depth and water column depth. In a refracting environment, however, the travel time also depends on the depth-dependent sound speed profile. Previously, the virtual head wave was shown to be observable in measurements of ocean ambient noise, and its arrival angle was used to estimate the seabed sound speed. By also using virtual head wave travel times, it is possible to invert for array depth and water column depth. The previous analysis was limited to the assumption of a Pekeris waveguide, a special case of the more realistic refracting waveguide. In this paper, the virtual head wave and the inversion method are considered in environments with refracting sound speed profiles. The theoretical framework and the inversion method are presented, along with illustrative simulations and an application to the Boundary'03 data.

To capture the demands of real-world listening, laboratory-based speech-in-noise tasks must better reflect the types of speech and environments listeners encounter in everyday life. This article reports the development of original sentence materials that were produced spontaneously with varying vocal efforts.
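The cross-correlation step at the heart of the virtual head wave abstract above can be illustrated with a minimal sketch: two receivers record the same broadband noise with a relative travel-time offset, and the peak of their cross-correlation recovers that offset. The geometry, sampling rate, and delay here are hypothetical, and the sketch omits everything specific to the head wave itself (array geometry, critical-angle arrival structure).

```python
import random

def crosscorr_peak_lag(a, b, max_lag):
    """Return the lag (in samples) that maximizes sum_n a[n] * b[n + lag]."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        s = sum(a[n] * b[n + lag]
                for n in range(len(a)) if 0 <= n + lag < len(b))
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

# Hypothetical geometry: one broadband noise source reaches receiver A
# 25 samples before receiver B; each record carries independent sensor noise.
random.seed(0)
n_samp, true_delay, fs = 2000, 25, 1000           # fs in Hz, assumed
src = [random.gauss(0.0, 1.0) for _ in range(n_samp + true_delay)]
a = [src[n + true_delay] + 0.1 * random.gauss(0.0, 1.0) for n in range(n_samp)]
b = [src[n] + 0.1 * random.gauss(0.0, 1.0) for n in range(n_samp)]

lag = crosscorr_peak_lag(a, b, max_lag=50)
travel_time_offset = lag / fs                     # recovered offset in seconds
```

In the real processing the recovered lag, combined with the arrival angle, feeds the inversion for array depth and water column depth; this sketch shows only the lag-recovery principle.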
These sentences were extracted from conversations between a talker pair (one female, one male) communicating in different realistic acoustic environments designed to elicit normal, raised, and loud vocal efforts. In total, 384 sentences were extracted to provide four equivalent lists of 16 sentences at each of the three efforts for the two talkers. The sentences were presented to 32 young, normal-hearing participants in stationary noise at five signal-to-noise ratios from -8 to 0 dB in 2 dB steps. Psychometric functions were fitted for each sentence, revealing an average 50% speech reception threshold (SRT50) of -5.2 dB and an average slope of 17.2%/dB. Sentences were then level-normalised to adjust their individual SRT50 values to the mean (-5.2 dB). The sentences may be combined with realistic background noise to provide an assessment method that better captures the perceptual demands of everyday communication.

In 2009-2014, autonomous hydrophones were deployed on established long-term moorings in the Fram Strait and Greenland Sea to record the multi-year, seasonal occurrence of vocalizing cetaceans. Sei whales have rarely been observed north of ~72°N, yet there was acoustic evidence of sei whale presence in the Fram Strait for several months during all five years of the study. More sei whale calls were recorded at the easternmost moorings in the Fram Strait, likely because of the presence of warm Atlantic water and a strong front concentrating prey in this area. Sei whale vocalizations were not recorded at the Greenland Sea 2009-2010 mooring, either because this area is not part of the northward migratory path of sei whales or because oceanographic conditions were not suitable for foraging. No clear relationship was observed between whale presence and water temperature data collected coincident with the acoustic data, but decadal time series of water temperature collected in the eastern Fram Strait by others exhibit a warming trend, which may make conditions there suitable for sei whales.
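The psychometric-function fitting summarized in the speech-in-noise abstract above (mean SRT50 of -5.2 dB, mean slope of 17.2%/dB) can be sketched with a standard logistic form. The article's exact parameterization is not given here, so the logistic shape and its slope convention (gradient at the 50% point, which for a logistic equals k/4) are assumptions.

```python
import math

def psychometric(snr_db, srt50=-5.2, slope_pct_per_db=17.2):
    """Logistic psychometric function for intelligibility vs. SNR (assumed
    form). The slope in proportion-per-dB at the midpoint is k/4, so the
    rate constant is k = 4 * slope / 100."""
    k = 4.0 * slope_pct_per_db / 100.0
    return 1.0 / (1.0 + math.exp(-k * (snr_db - srt50)))

def srt(target=0.5, lo=-30.0, hi=20.0, tol=1e-6):
    """Invert the monotone psychometric function by bisection to find the
    SNR giving `target` proportion correct."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psychometric(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)
```

With these mean parameters, `srt()` recovers -5.2 dB, and evaluating `psychometric` across the tested -8 to 0 dB range shows how steep the function is: most of the rise from near-zero to near-perfect intelligibility happens within a few dB.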
Continued monitoring of the region will be required to determine whether the presence of sei whales in these polar waters is ephemeral or a common occurrence.

Echolocation signals emitted by odontocetes can be roughly classified into three broad categories: broadband echolocation signals, narrowband high-frequency echolocation signals, and frequency-modulated clicks. Previous measurements of broadband echolocation signal propagation in the bottlenose dolphin (Tursiops truncatus) did not find any evidence of focusing as the signals travel from the near field to the far field. Finite element analysis (FEA) of high-resolution computed tomography scan data was used to examine signal propagation of the broadband echolocation signals of dolphins and the narrowband echolocation signals of porpoises. The FEA results were used to simulate the propagation of clicks from the phonic lips, through the forehead, and finally into the water. Biosonar beam formation in the near field and far field, including the amplitude contours for the two species, was determined. The finite element model result for the simulated amplitude contour in the horizontal plane was consistent with prior direct-measurement results for Tursiops, validating the model.